Results 1 - 9 of 9
1.
Neurol Res Pract ; 6(1): 15, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38449051

ABSTRACT

INTRODUCTION: In Multiple Sclerosis (MS), patients' characteristics and (bio)markers that reliably predict the individual disease prognosis at disease onset are lacking. Cohort studies allow a close follow-up of MS histories and a thorough phenotyping of patients. Therefore, a multicenter cohort study was initiated to implement a wide spectrum of data and (bio)markers in newly diagnosed patients. METHODS: ProVal-MS (Prospective study to validate a multidimensional decision score that predicts treatment outcome at 24 months in untreated patients with clinically isolated syndrome or early Relapsing-Remitting-MS) is a prospective cohort study in patients with clinically isolated syndrome (CIS) or Relapsing-Remitting (RR)-MS (McDonald 2017 criteria), diagnosed within the last two years, conducted at five academic centers in Southern Germany. The collection of clinical, laboratory, imaging, and paraclinical data as well as biosamples is harmonized across centers. The primary goal is to validate (discrimination and calibration) the previously published DIFUTURE MS-Treatment Decision score (MS-TDS). The score supports clinical decision-making regarding the options of early (within 6 months after study baseline) platform medication (interferon beta, glatiramer acetate, dimethyl/diroximel fumarate, teriflunomide), or no immediate treatment (> 6 months after baseline) of patients with early RR-MS and CIS by predicting the probability of new or enlarging lesions in cerebral magnetic resonance images (MRIs) between 6 and 24 months. Further objectives are refining the MS-TDS score and providing data to identify new markers reflecting disease course and severity. The project also provides a technical evaluation of the ProVal-MS cohort within the IT infrastructure of the DIFUTURE consortium (Data Integration for Future Medicine) and assesses the efficacy of the data sharing techniques developed. PERSPECTIVE: Clinical cohorts provide the infrastructure to discover and to validate relevant disease-specific findings. A successful validation of the MS-TDS will add a new clinical decision tool to the armamentarium of practicing MS neurologists, from which newly diagnosed MS patients may benefit. TRIAL REGISTRATION: ProVal-MS has been registered in the German Clinical Trials Register, "Deutsches Register Klinischer Studien" (DRKS), ID: DRKS00014034, date of registration: 21 December 2018; https://drks.de/search/en/trial/DRKS00014034.
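
A minimal sketch (not the study's analysis code) of how the stated validation goals, discrimination and calibration of a risk score such as the MS-TDS, are commonly quantified once predicted probabilities and observed MRI outcomes are available; function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def validate_risk_score(predicted_prob: np.ndarray, observed: np.ndarray) -> dict:
    """Summarize discrimination and calibration of a probabilistic risk score."""
    # Discrimination: AUROC, i.e. the probability that a randomly chosen patient
    # with the outcome receives a higher predicted risk than one without it.
    auroc = roc_auc_score(observed, predicted_prob)

    # Calibration: logistic recalibration of the outcome on the logit of the
    # predicted risk; a slope near 1 and an intercept near 0 indicate good calibration.
    eps = 1e-6
    p = np.clip(predicted_prob, eps, 1 - eps)
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    recal = LogisticRegression(C=1e6).fit(logit, observed)  # large C: effectively unpenalized
    return {
        "auroc": float(auroc),
        "calibration_slope": float(recal.coef_[0, 0]),
        "calibration_intercept": float(recal.intercept_[0]),
    }
```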

2.
Ther Adv Neurol Disord ; 16: 17562864231161892, 2023.
Article in English | MEDLINE | ID: mdl-36993939

ABSTRACT

Background: Multiple sclerosis (MS) is a chronic neuroinflammatory disease affecting about 2.8 million people worldwide. Disease course after the most common diagnoses of relapsing-remitting multiple sclerosis (RRMS) and clinically isolated syndrome (CIS) is highly variable and cannot be reliably predicted. This impairs early personalized treatment decisions. Objectives: The main objective of this study was to algorithmically support clinical decision-making regarding the options of early platform medication or no immediate treatment of patients with early RRMS and CIS. Design: Retrospective monocentric cohort study within the Data Integration for Future Medicine (DIFUTURE) Consortium. Methods: Multiple data sources of routine clinical, imaging and laboratory data derived from a large and deeply characterized cohort of patients with MS were integrated to conduct a retrospective study to create and internally validate a treatment decision score [Multiple Sclerosis Treatment Decision Score (MS-TDS)] through model-based random forests (RFs). The MS-TDS predicts the probability of no new or enlarging lesions in cerebral magnetic resonance images (cMRIs) between 6 and 24 months after the first cMRI. Results: Data from 65 predictors collected for 475 patients between 2008 and 2017 were included. No medication and platform medication were administered to 277 (58.3%) and 198 (41.7%) patients, respectively. The MS-TDS predicted individual outcomes with a cross-validated area under the receiver operating characteristic curve (AUROC) of 0.624. The underlying RF prediction model provides patient-specific MS-TDS values and probabilities of treatment success. The latter may increase by 5-20% for half of the patients if the treatment considered superior by the MS-TDS is used. Conclusion: Routine clinical data from multiple sources can be successfully integrated to build prediction models to support treatment decision-making. In this study, the resulting MS-TDS estimates individualized treatment success probabilities that can identify patients who benefit from early platform medication. External validation of the MS-TDS is required, and a prospective study is currently being conducted. In addition, the clinical relevance of the MS-TDS needs to be established.
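
The study fitted model-based random forests; purely as a hedged illustration, the sketch below uses a standard scikit-learn random forest to show how a cross-validated AUROC such as the 0.624 reported above is typically estimated. Feature and outcome names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def cross_validated_auroc(features: pd.DataFrame, outcome: pd.Series) -> float:
    """Out-of-sample discrimination of a treatment-decision model.

    `outcome` is 1 if no new or enlarging cMRI lesions occurred between
    months 6 and 24, else 0 (the endpoint defined in the abstract above).
    """
    model = RandomForestClassifier(n_estimators=500, min_samples_leaf=10, random_state=42)
    scores = cross_val_score(model, features, outcome, cv=5, scoring="roc_auc")
    return float(scores.mean())
```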

3.
Stud Health Technol Inform ; 289: 240-243, 2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35062137

ABSTRACT

Health data from hospital information systems are valuable sources for medical research but have known issues in terms of data quality. In a nationwide data integration project in Germany, health care data from all participating university hospitals are being pooled and refined in local centers. As there is currently no overarching agreement on how to deal with errors and implausibilities, meetings were held to discuss the current status and the need to develop consensual measures at the organizational and technical levels. This paper analyzes the similarities and differences that emerged. The results show that although data quality checks are carried out at all sites, there is a lack of centrally coordinated data quality indicators, of formalized plausibility rules, and of a repository from which such rules can be queried automatically, for example in ETL processes.


Subject(s)
Biomedical Research, Medical Informatics, Data Accuracy, Delivery of Health Care, Germany, Humans
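
A minimal sketch of the direction argued for in entry 3: plausibility rules expressed declaratively as data, so they could live in a shared repository and be queried automatically by ETL jobs instead of being hard-coded per site. Rule IDs, columns and thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical rule repository entries: each rule names a column and a check.
PLAUSIBILITY_RULES = [
    {"id": "R001", "column": "heart_rate",
     "check": lambda s: s.between(20, 300)},
    {"id": "R002", "column": "birth_date",
     "check": lambda s: pd.to_datetime(s, errors="coerce") <= pd.Timestamp.today()},
]

def apply_rules(df: pd.DataFrame, rules=PLAUSIBILITY_RULES) -> pd.DataFrame:
    """Return one row per rule with the number of violating records."""
    results = []
    for rule in rules:
        col = rule["column"]
        # Columns missing at a site count as fully violating the rule.
        ok = rule["check"](df[col]) if col in df else pd.Series(False, index=df.index)
        results.append({"rule_id": rule["id"], "column": col,
                        "violations": int((~ok).sum())})
    return pd.DataFrame(results)
```
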
4.
JMIR Med Inform ; 8(7): e15918, 2020 Jul 21.
Article in English | MEDLINE | ID: mdl-32706673

ABSTRACT

BACKGROUND: Modern data-driven medical research provides new insights into the development and course of diseases and enables novel methods of clinical decision support. Clinical and translational data warehouses, such as Informatics for Integrating Biology and the Bedside (i2b2) and tranSMART, are important infrastructure components that provide users with unified access to the large heterogeneous data sets needed to realize this and support use cases such as cohort selection, hypothesis generation, and ad hoc data analysis. OBJECTIVE: Often, different warehousing platforms are needed to support different use cases and different types of data. Moreover, to achieve an optimal data representation within the target systems, specific domain knowledge is needed when designing data-loading processes. Consequently, informaticians need to work closely with clinicians and researchers in short iterations. This is a challenging task as installing and maintaining warehousing platforms can be complex and time consuming. Furthermore, data loading typically requires significant effort in terms of data preprocessing, cleansing, and restructuring. The platform described in this study aims to address these challenges. METHODS: We formulated system requirements to achieve agility in terms of platform management and data loading. The derived system architecture includes a cloud infrastructure with unified management interfaces for multiple warehouse platforms and a data-loading pipeline with a declarative configuration paradigm and meta-loading approach. The latter compiles data and configuration files into forms required by existing loading tools, thereby automating a wide range of data restructuring and cleansing tasks. We demonstrated the fulfillment of the requirements and the originality of our approach by an experimental evaluation and a comparison with previous work. RESULTS: The platform supports both i2b2 and tranSMART with built-in security. Our experiments showed that the loading pipeline accepts input data that cannot be loaded with existing tools without preprocessing. Moreover, it lowered efforts significantly, reducing the size of configuration files required by factors of up to 22 for tranSMART and 1135 for i2b2. The time required to perform the compilation process was roughly equivalent to the time required for actual data loading. Comparison with other tools showed that our solution was the only tool fulfilling all requirements. CONCLUSIONS: Our platform significantly reduces the efforts required for managing clinical and translational warehouses and for loading data in various formats and structures, such as complex entity-attribute-value structures often found in laboratory data. Moreover, it facilitates the iterative refinement of data representations in the target platforms, as the required configuration files are very compact. The quantitative measurements presented are consistent with our experiences of significantly reduced efforts for building warehousing platforms in close cooperation with medical researchers. Both the cloud-based hosting infrastructure and the data-loading pipeline are available to the community as open source software with comprehensive documentation.
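
A minimal sketch of the "meta-loading" idea described above: a compact, declarative column mapping is compiled into the verbose entity-attribute-value rows that warehouse loading tools typically expect. The configuration format, concept codes and column names are hypothetical and not the platform's actual syntax.

```python
import pandas as pd

MAPPING = {  # hypothetical declarative configuration
    "patient_id": {"role": "subject"},
    "crp_mg_l":   {"role": "fact", "concept": "LAB:CRP", "unit": "mg/L"},
    "diagnosis":  {"role": "fact", "concept": "DX:ICD10"},
}

def compile_to_eav(df: pd.DataFrame, mapping=MAPPING) -> pd.DataFrame:
    """Expand a wide clinical table into entity-attribute-value facts."""
    subject_col = next(c for c, m in mapping.items() if m["role"] == "subject")
    facts = []
    for col, spec in mapping.items():
        if spec["role"] != "fact":
            continue
        for _, row in df.iterrows():
            facts.append({"subject_id": row[subject_col],
                          "concept": spec["concept"],
                          "value": row[col],
                          "unit": spec.get("unit")})
    return pd.DataFrame(facts)
```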

5.
BMC Med Inform Decis Mak ; 20(1): 29, 2020 02 11.
Article in English | MEDLINE | ID: mdl-32046701

ABSTRACT

BACKGROUND: Modern data-driven medical research promises to provide new insights into the development and course of disease and to enable novel methods of clinical decision support. To realize this, machine learning models can be trained to make predictions from clinical, paraclinical and biomolecular data. In this process, privacy protection and regulatory requirements need careful consideration, as the resulting models may leak sensitive personal information. To counter this threat, a wide range of methods for integrating machine learning with formal methods of privacy protection have been proposed. However, there is a significant lack of practical tools to create and evaluate such privacy-preserving models. In this software article, we report on our ongoing efforts to bridge this gap. RESULTS: We have extended the well-known ARX anonymization tool for biomedical data with machine learning techniques to support the creation of privacy-preserving prediction models. Our methods are particularly well suited for applications in biomedicine, as they preserve the truthfulness of data (e.g. no noise is added) and they are intuitive and relatively easy to explain to non-experts. Moreover, our implementation is highly versatile, as it supports binomial and multinomial target variables, different types of prediction models and a wide range of privacy protection techniques. All methods have been integrated into a sound framework that supports the creation, evaluation and refinement of models through intuitive graphical user interfaces. To demonstrate the broad applicability of our solution, we present three case studies in which we created and evaluated different types of privacy-preserving prediction models for breast cancer diagnosis, diagnosis of acute inflammation of the urinary system and prediction of the contraceptive method used by women. In this process, we also used a wide range of different privacy models (k-anonymity, differential privacy and a game-theoretic approach) as well as different data transformation techniques. CONCLUSIONS: With the tool presented in this article, accurate prediction models can be created that preserve the privacy of individuals represented in the training set in a variety of threat scenarios. Our implementation is available as open source software.


Subject(s)
Confidentiality, Data Anonymization, Decision Support Systems, Clinical, Models, Statistical, Software, Biomedical Research, Humans, Machine Learning, ROC Curve, Reproducibility of Results
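
ARX itself is a Java tool, so the sketch below is only a conceptual stand-in for the workflow described in entry 5: quasi-identifiers are generalized (truthfully, without adding noise) until every combination occurs at least k times, and a prediction model is then trained on the anonymized records. The columns, the age hierarchy and the assumption of numeric quasi-identifiers are illustrative.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def generalize_to_k_anonymity(df: pd.DataFrame, quasi_ids, k: int = 5) -> pd.DataFrame:
    """Coarsen a hypothetical 'age' column with ever wider bins until every
    quasi-identifier combination occurs at least k times (truthful, no noise)."""
    for bin_width in (5, 10, 20, 50):
        anon = df.copy()
        anon["age"] = (anon["age"] // bin_width) * bin_width
        if anon.groupby(quasi_ids).size().min() >= k:
            return anon
    raise ValueError("k-anonymity not reachable with this simple generalization hierarchy")

def train_privacy_preserving_model(df: pd.DataFrame, quasi_ids, target: str, k: int = 5):
    """Anonymize first, then fit a classifier on the released (anonymized) data.
    Assumes numeric quasi-identifiers for brevity."""
    anon = generalize_to_k_anonymity(df, quasi_ids, k)
    return LogisticRegression(max_iter=1000).fit(anon[quasi_ids], anon[target])
```
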
6.
Stud Health Technol Inform ; 267: 207-214, 2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31483274

ABSTRACT

Modern medical research requires access to patient-level data of significant detail and volume. In this context, privacy concerns and legal requirements demand careful consideration. Data anonymization, which means that data is transformed to reduce privacy risks, is an important building block of data protection concepts. However, common methods of data anonymization often fail to protect data against inference of sensitive attribute values (also called attribute disclosure). Measures against such attacks have been developed, but it has been argued that they are of little practical relevance, as they involve significant data transformations which reduce output data utility to an unacceptable degree. In this article, we present an experimental study of the degree of protection and the impact on data utility provided by different approaches for protecting biomedical data from attribute disclosure. We quantified the utility and privacy risks of datasets that had been protected using different anonymization methods and parameterizations. We compared the results with trivial baseline approaches, visualized them as risk-utility curves and analyzed basic statistical properties of the sensitive attributes (e.g. the skewness of their distribution). Our results confirm that it is difficult to protect data from attribute disclosure, but they also indicate that reasonable degrees of protection can be achieved when appropriate methods are chosen based on data characteristics. While it is hard to give general recommendations, the approach presented in this article and the tools that we have used can be helpful for deciding how a given dataset can best be protected in a specific usage scenario.


Subject(s)
Biomedical Research, Disclosure, Computer Security, Data Anonymization, Humans, Privacy
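
A minimal sketch of one way to quantify attribute disclosure risk, in the spirit of the experiments in entry 6 though not with the paper's exact metrics: within each quasi-identifier equivalence class, measure how confidently an attacker could infer the sensitive value. Column roles are illustrative.

```python
import pandas as pd

def attribute_disclosure_risk(df: pd.DataFrame, quasi_ids, sensitive: str) -> float:
    """Average, over records, of the most frequent sensitive value's share
    within the record's equivalence class (1.0 means full disclosure)."""
    def class_risk(group: pd.Series) -> float:
        # Share of the dominant sensitive value in this quasi-identifier group.
        return group.value_counts(normalize=True).max()

    risks = df.groupby(quasi_ids)[sensitive].transform(class_risk)
    return float(risks.mean())
```
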
7.
Int J Med Inform ; 126: 72-81, 2019 06.
Article in English | MEDLINE | ID: mdl-31029266

ABSTRACT

BACKGROUND: Modern data-driven approaches to medical research require patient-level information at comprehensive depth and breadth. To create the required big datasets, information from disparate sources can be integrated into clinical and translational warehouses. This is typically implemented with Extract, Transform, Load (ETL) processes, which access, harmonize and upload data into the analytics platform. OBJECTIVE: Privacy protection needs careful consideration when data is pooled or re-used for secondary purposes, and data anonymization is an important protection mechanism. However, common ETL environments do not support anonymization, and common anonymization tools cannot easily be integrated into ETL workflows. The objective of the work described in this article was to bridge this gap. METHODS: Our main design goals were (1) to base the anonymization process on expert-level risk assessment methodologies, (2) to use transformation methods which preserve both the truthfulness of data and its schematic properties (e.g. data types), (3) to implement a method which is easy to understand and intuitive to configure, and (4) to provide high scalability. RESULTS: We designed a novel and efficient anonymization process and implemented a plugin for the Pentaho Data Integration (PDI) platform, which enables integrating data anonymization and re-identification risk analyses directly into ETL workflows. By combining multiple instances of the plugin in a single ETL process, data can be protected against multiple threats. The plugin supports very large datasets by leveraging the streaming-based processing model of the underlying platform. We present results of an extensive experimental evaluation and discuss successful applications. CONCLUSIONS: Our work shows that expert-level anonymization methodologies can be integrated into ETL workflows. Our implementation is available under a non-restrictive open source license and it overcomes several limitations of other data anonymization tools.


Subject(s)
Biomedical Research, Privacy, Algorithms, Datasets as Topic, Humans
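
The plugin discussed in entry 7 is Java code for Pentaho Data Integration; purely to illustrate its streaming, row-at-a-time processing model, the sketch below shows an analogous generator-based step that runs in constant memory. Field names and transformations are invented.

```python
from typing import Iterable, Iterator

def anonymize_stream(rows: Iterable[dict]) -> Iterator[dict]:
    """Yield transformed rows one at a time: drop direct identifiers and
    coarsen the birth date (assumed ISO 'YYYY-MM-DD' strings) to the year."""
    for row in rows:
        out = dict(row)
        out.pop("name", None)              # suppress direct identifiers
        out.pop("insurance_number", None)
        if out.get("birth_date"):
            out["birth_date"] = out["birth_date"][:4]  # keep year only (truthful, no noise)
        yield out

# Usage: plug the generator between a reader and a writer, e.g.
# writer.write_rows(anonymize_stream(reader.read_rows()))
```
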
8.
IEEE J Biomed Health Inform ; 22(2): 611-622, 2018 03.
Article in English | MEDLINE | ID: mdl-28358693

ABSTRACT

The sharing of sensitive personal health data is an important aspect of biomedical research. Methods of data de-identification are often used in this process to trade off the granularity of data against privacy risks. However, traditional approaches, such as HIPAA Safe Harbor or k-anonymization, often fail to provide data with sufficient quality. Alternatively, data can be de-identified only to a degree which still allows us to use it as required, e.g., to carry out specific analyses. Controlled environments, which restrict the ways recipients can interact with the data, can then be used to cope with residual risks. The contributions of this article are twofold. First, we present a method for implementing controlled data sharing environments and analyze its privacy properties. Second, we present a de-identification method which is specifically suited for sanitizing health data which is to be shared in such environments. Traditional de-identification methods control the uniqueness of records in a dataset. The basic idea of our approach is to reduce the probability that a record in a dataset has characteristics which are unique within the underlying population. As the characteristics of the population are typically not known, we have implemented a pragmatic solution in which properties of the population are modeled with statistical methods. We have further developed an accompanying process for evaluating and validating the degree of protection provided. The results of an extensive experimental evaluation show that our approach enables the safe sharing of high-quality data and that it is highly scalable.


Subject(s)
Confidentiality, Databases, Factual, Information Dissemination/methods, Medical Records, Algorithms, Biomedical Research, Humans
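
A deliberately simplified sketch of the core idea in entry 8: rather than only flagging records that are unique in the sample, estimate risk against the underlying population. The paper models population characteristics with statistical methods; the proxy below merely rescales sample equivalence-class sizes by a known sampling fraction and is shown for illustration only.

```python
import pandas as pd

def population_risk_proxy(df: pd.DataFrame, quasi_ids, sampling_fraction: float) -> pd.Series:
    """Per-record risk proxy: 1 / estimated population equivalence-class size.

    The population class size is naively estimated as the sample class size
    divided by the sampling fraction; real estimators model the population
    distribution more carefully.
    """
    sample_class_size = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    estimated_population_class_size = sample_class_size / sampling_fraction
    return 1.0 / estimated_population_class_size
```
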
9.
BMC Med Inform Decis Mak ; 17(1): 30, 2017 03 23.
Article in English | MEDLINE | ID: mdl-28330491

ABSTRACT

BACKGROUND: Translational researchers need robust IT solutions to access a range of data types, varying from public data sets to pseudonymised patient information with restricted access, provided on a case-by-case basis. The reason for this complication is that managing access policies to sensitive human data must consider issues of data confidentiality, identifiability, extent of consent, and data usage agreements. All these ethical, social and legal aspects must be incorporated into a differential management of restricted access to sensitive data. METHODS: In this paper we present a pilot system that uses several common open source software components in a novel combination to coordinate access to heterogeneous biomedical data repositories containing open data (open access) as well as sensitive data (restricted access) in the domain of biobanking and biosample research. Our approach is based on a digital identity federation and software to manage resource access entitlements. RESULTS: Open source software components were assembled and configured in such a way that they allow for different ways of restricted access according to the protection needs of the data. We have tested the resulting pilot infrastructure and assessed its performance, feasibility and reproducibility. CONCLUSIONS: Common open source software components are sufficient to allow for the creation of a secure system for differential access to sensitive data. The implementation of this system is exemplary for researchers facing similar requirements for restricted access to data. Here we report the experience and lessons learned from our pilot implementation, which may be useful for similar use cases. Furthermore, we discuss possible extensions for more complex scenarios.


Subject(s)
Biological Specimen Banks/standards, Biomedical Research/standards, Computer Security/standards, Datasets as Topic, Translational Research, Biomedical/standards, Humans, Pilot Projects
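
A minimal sketch of the access decision such a federation has to encode (the pilot in entry 9 combines existing open source identity-federation and entitlement-management components; this is not their configuration): open data is readable by any authenticated user, while restricted data requires a matching entitlement. All names are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    access_level: str                       # "open" or "restricted"
    required_entitlement: str | None = None

@dataclass
class FederatedUser:
    identity: str                           # identity asserted by the federation
    entitlements: set[str] = field(default_factory=set)

def may_access(user: FederatedUser, dataset: Dataset) -> bool:
    """Grant open data to any authenticated user; restricted data only with a matching entitlement."""
    if dataset.access_level == "open":
        return True
    return dataset.required_entitlement in user.entitlements

# Usage:
# biobank = Dataset("biobank-samples", "restricted", "urn:example:biobank:read")
# may_access(FederatedUser("alice@uni.example", {"urn:example:biobank:read"}), biobank)  # True
```
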